Kubernetes Network Policy Explained
CNAPP Explained: The Smartest Way to Secure Cloud-Native Apps with EDSPL

Introduction: The New Era of Cloud-Native Apps
Cloud-native applications are rewriting the rules of how we build, scale, and secure digital products. Designed for agility and rapid innovation, these apps demand security strategies that are just as fast and flexible. That's where CNAPP (Cloud-Native Application Protection Platform) comes in.
But simply deploying CNAPP isn't enough.
You need the right strategy, the right partner, and the right security intelligence. That's where EDSPL shines.
What is CNAPP? (And Why Your Business Needs It)
CNAPP stands for Cloud-Native Application Protection Platform, a unified framework that protects cloud-native apps throughout their lifecycle, from development to production and beyond.
Instead of relying on fragmented tools, CNAPP combines multiple security services into a cohesive solution:
Cloud Security
Vulnerability management
Identity access control
Runtime protection
DevSecOps enablement
In short, it covers the full spectrum: from your code to your container, from your workload to your network security.
Why Traditional Security Isn't Enough Anymore
The old way of securing applications with perimeter-based tools and manual checks doesn't work for cloud-native environments. Here's why:
Infrastructure is dynamic (containers, microservices, serverless)
Deployments are continuous
Apps run across multiple platforms
You need security that is cloud-aware, automated, and context-rich; CNAPP and EDSPL's services deliver all of these together.
Core Components of CNAPP
Let's break down the core capabilities of CNAPP and how EDSPL customizes them for your business:
1. Cloud Security Posture Management (CSPM)
Checks your cloud infrastructure for misconfigurations and compliance gaps.
See how EDSPL handles cloud security with automated policy enforcement and real-time visibility.
2. Cloud Workload Protection Platform (CWPP)
Protects virtual machines, containers, and functions from attacks.
This includes deep integration with application security layers to scan, detect, and fix risks before deployment.
3. CIEM: Cloud Infrastructure Entitlement Management
Monitors access rights and roles across multi-cloud environments.
Your network, routing, and storage environments are covered with strict permission models.
4. DevSecOps Integration
CNAPP shifts security left, moving it early into the DevOps cycle. EDSPL's managed services ensure security tools are embedded directly into your CI/CD pipelines.
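As a sketch of what shifting security left can look like in practice, here is a minimal, hypothetical CI gate that fails a build when a vulnerability scan reports findings at or above a chosen severity. The finding format, CVE IDs, and threshold are illustrative assumptions, not EDSPL's actual tooling.

```python
# Severity ordering used to compare findings against the gate threshold.
SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate(findings, threshold="HIGH"):
    """Return (ok, blocking): ok is False when any finding meets the threshold."""
    limit = SEVERITY_RANK[threshold]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= limit]
    return len(blocking) == 0, blocking

# Example scan results: a MEDIUM finding passes, a CRITICAL one blocks the build.
scan = [
    {"id": "CVE-2024-0001", "severity": "MEDIUM"},
    {"id": "CVE-2024-0002", "severity": "CRITICAL"},
]
ok, blocking = gate(scan)
print("pass" if ok else "fail: " + ", ".join(f["id"] for f in blocking))
# prints: fail: CVE-2024-0002
```

A real pipeline would feed this from an actual scanner's report and run it as a required step before the deploy stage.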
5. Kubernetes and Container Security
Containers need runtime defense. Our approach ensures zero-day protection within compute environments and dynamic clusters.
How EDSPL Tailors CNAPP for Real-World Environments
Every organization's tech stack is unique. That's why EDSPL never takes a one-size-fits-all approach. We customize CNAPP for your:
Cloud provider setup
Mobility strategy
Data center switching
Backup architecture
Storage preferences
This ensures your entire digital ecosystem is secure, streamlined, and scalable.
Case Study: CNAPP in Action with EDSPL
The Challenge
A fintech company using a hybrid cloud setup faced:
Misconfigured services
Shadow admin accounts
Poor visibility across Kubernetes
EDSPL's Solution
Integrated CNAPP with CIEM + CSPM
Hardened their routing infrastructure
Applied real-time runtime policies at the node level
The Results
75% drop in vulnerabilities
Improved time to resolution by 4x
Full compliance with ISO, SOC2, and GDPR
Why EDSPL's CNAPP Stands Out
While most providers stop at integration, EDSPL goes beyond:
End-to-End Security: From app code to switching hardware, every layer is secured.
Proactive Threat Detection: Real-time alerts and behavior analytics.
Customizable Dashboards: Unified views tailored to your team.
24x7 SOC Support: With expert incident response.
Future-Proofing: Our background vision keeps you ready for what's next.
EDSPLâs Broader Capabilities: CNAPP and Beyond
While CNAPP is essential, your digital ecosystem needs full-stack protection. EDSPL offers:
Network security
Application security
Switching and routing solutions
Storage and backup services
Mobility and remote access optimization
Managed and maintenance services for 24x7 support
Whether you're building apps, protecting data, or scaling globally, we help you do it securely.
Let's Talk CNAPP
You've read the what, why, and how of CNAPP; now it's time to act.
Reach us for a free CNAPP consultation, or get in touch with our cloud security specialists now.
Secure your cloud-native future with EDSPL, because prevention is always smarter than cure.
OpenShift vs Kubernetes: Key Differences Explained
Kubernetes has become the de facto standard for container orchestration, enabling organizations to manage and scale containerized applications efficiently. However, OpenShift, built on top of Kubernetes, offers additional features that streamline development and deployment. While they share core functionalities, they have distinct differences that impact their usability. In this blog, we explore the key differences between OpenShift and Kubernetes.
1. Core Overview
Kubernetes:
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and operation of application containers. It provides the building blocks for containerized workloads but requires additional tools for complete enterprise-level functionality.
OpenShift:
OpenShift is a Kubernetes-based container platform developed by Red Hat. It provides additional features such as a built-in CI/CD pipeline, enhanced security, and developer-friendly tools to simplify Kubernetes management.
2. Installation & Setup
Kubernetes:
Requires manual installation and configuration.
Cluster setup involves configuring multiple components such as kube-apiserver, kube-controller-manager, kube-scheduler, and networking.
Offers flexibility but requires expertise to manage.
OpenShift:
Provides an easier installation process with automated scripts.
Includes a fully integrated web console for management.
Requires Red Hat OpenShift subscriptions for enterprise-grade support.
3. Security & Authentication
Kubernetes:
Security policies and authentication need to be manually configured.
Role-Based Access Control (RBAC) is available but requires additional setup.
OpenShift:
Comes with built-in security features.
Uses Security Context Constraints (SCCs) for enhanced security.
Integrated authentication mechanisms, including OAuth and LDAP support.
4. Networking
Kubernetes:
Uses third-party plugins (e.g., Calico, Flannel, Cilium) for networking.
Network policies must be configured separately.
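For instance, a common starting point is a "default deny" ingress policy. The manifest below is shown as a Python dict, as a Kubernetes client would submit it; the namespace and name are example values:

```python
import json

# Default-deny ingress NetworkPolicy. An empty podSelector ({}) selects every
# pod in the namespace; listing "Ingress" in policyTypes while declaring no
# ingress rules means no inbound traffic is allowed to those pods.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": "demo"},
    "spec": {
        "podSelector": {},           # {} matches all pods in the namespace
        "policyTypes": ["Ingress"],  # no ingress rules listed => deny all ingress
    },
}

print(json.dumps(default_deny, indent=2))
```

Note that the policy only takes effect when the cluster's CNI plugin enforces NetworkPolicy, which is why the plugin choice above (Calico, Cilium, etc.) matters.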
OpenShift:
Uses an Open vSwitch-based SDN by default (OVN-Kubernetes in recent releases).
Provides automatic service discovery and routing.
Built-in router and HAProxy-based load balancing.
5. Development & CI/CD Integration
Kubernetes:
Requires third-party tools for CI/CD (e.g., Jenkins, ArgoCD, Tekton).
Developers must integrate CI/CD pipelines manually.
OpenShift:
Comes with built-in CI/CD capabilities via OpenShift Pipelines.
Source-to-Image (S2I) feature allows developers to build images directly from source code.
Supports GitOps methodologies out of the box.
6. User Interface & Management
Kubernetes:
Managed through the command line (kubectl) or third-party UI tools (e.g., Lens, Rancher).
No built-in dashboard; requires separate installation.
OpenShift:
Includes a built-in web console for easier management.
Provides graphical interfaces for monitoring applications, logs, and metrics.
7. Enterprise Support & Cost
Kubernetes:
Open-source and free to use.
Requires skilled teams to manage and maintain infrastructure.
Support is available from third-party providers.
OpenShift:
Requires a Red Hat subscription for enterprise support.
Offers enterprise-grade stability, support, and compliance features.
Managed OpenShift offerings are available via cloud providers (AWS, Azure, GCP).
Conclusion
Both OpenShift and Kubernetes serve as powerful container orchestration platforms. Kubernetes is highly flexible and widely adopted, but it demands expertise for setup and management. OpenShift, on the other hand, simplifies the experience with built-in security, networking, and developer tools, making it a strong choice for enterprises looking for a robust, supported Kubernetes distribution.
Choosing between them depends on your organization's needs: if you seek flexibility and open-source freedom, Kubernetes is ideal; if you prefer an enterprise-ready solution with out-of-the-box tools, OpenShift is the way to go.
For more details, visit www.hawkstack.com
Mastering OpenShift Administration II: Advanced Techniques and Best Practices
Introduction
Briefly introduce OpenShift as a leading Kubernetes platform for managing containerized applications.
Mention the significance of advanced administration skills for managing and scaling enterprise-level environments.
Highlight that this blog post will cover key concepts and techniques from the OpenShift Administration II course.
Section 1: Understanding OpenShift Administration II
Explain what OpenShift Administration II covers.
Mention the prerequisites for this course (e.g., knowledge of OpenShift Administration I, basics of Kubernetes, containerization, and Linux system administration).
Describe the importance of this course for professionals looking to advance their OpenShift and Kubernetes skills.
Section 2: Key Concepts and Techniques
Advanced Cluster Management
Managing and scaling clusters efficiently.
Techniques for deploying multiple clusters in different environments (hybrid or multi-cloud).
Best practices for disaster recovery and fault tolerance.
Automating OpenShift Operations
Introduction to automation in OpenShift using Ansible and other automation tools.
Writing and executing playbooks to automate day-to-day administrative tasks.
Streamlining OpenShift updates and upgrades with automation scripts.
Optimizing Resource Usage
Best practices for resource optimization in OpenShift clusters.
Managing workloads with resource quotas and limits.
Performance tuning techniques for maximizing cluster efficiency.
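The quota-and-limits point above can be sketched as two standard Kubernetes objects, expressed here as Python dicts. The namespace and the numeric values are illustrative placeholders, not tuning recommendations:

```python
# A namespace-wide ResourceQuota caps aggregate CPU/memory and pod count.
quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "team-quota", "namespace": "team-a"},
    "spec": {"hard": {
        "requests.cpu": "4", "requests.memory": "8Gi",
        "limits.cpu": "8", "limits.memory": "16Gi",
        "pods": "20",
    }},
}

# A LimitRange supplies per-container defaults, so pods that omit explicit
# requests/limits still count sensibly against the quota above.
limit_range = {
    "apiVersion": "v1",
    "kind": "LimitRange",
    "metadata": {"name": "container-defaults", "namespace": "team-a"},
    "spec": {"limits": [{
        "type": "Container",
        "default": {"cpu": "500m", "memory": "512Mi"},         # default limits
        "defaultRequest": {"cpu": "250m", "memory": "256Mi"},  # default requests
    }]},
}
```

In practice the right numbers come from observed workload usage, which is exactly what the performance-tuning techniques in this section feed into.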
Section 3: Security and Compliance
Overview of security considerations in OpenShift environments.
Role-based access control (RBAC) to manage user permissions.
Implementing network security policies to control traffic within the cluster.
Ensuring compliance with industry standards and best practices.
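To make the RBAC bullet concrete, a namespaced read-only role and its binding can be expressed as below. The namespace and user name are hypothetical examples:

```python
# Role: grants read-only access to pods in the "dev" namespace.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "dev"},
    "rules": [{
        "apiGroups": [""],                 # "" denotes the core API group
        "resources": ["pods"],
        "verbs": ["get", "list", "watch"],
    }],
}

# RoleBinding: attaches the role to an (illustrative) user.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "read-pods", "namespace": "dev"},
    "subjects": [{"kind": "User", "name": "dev-user",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader",
                "apiGroup": "rbac.authorization.k8s.io"},
}
```

The same pattern with ClusterRole and ClusterRoleBinding extends permissions cluster-wide when a namespace scope is too narrow.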
Section 4: Troubleshooting and Performance Tuning
Common issues encountered in OpenShift environments and how to resolve them.
Tools and techniques for monitoring cluster health and diagnosing problems.
Performance tuning strategies to ensure optimal OpenShift performance.
Section 5: Real-World Use Cases
Share some real-world scenarios where OpenShift Administration II skills are applied.
Discuss how advanced OpenShift administration techniques can help enterprises achieve their business goals.
Highlight the role of OpenShift in modern DevOps and CI/CD pipelines.
Conclusion
Summarize the key takeaways from the blog post.
Encourage readers to pursue the OpenShift Administration II course to elevate their skills.
Mention any upcoming training sessions or resources available on platforms like HawkStack for those interested in OpenShift.
For more details, visit www.hawkstack.com
Tags: redhatcourses, information technology, containerorchestration, docker, kubernetes, container, linux, containersecurity, dockerswarm
Hybrid Cloud Strategies for Modern Operations Explained
By combining these two cloud models, organizations can enhance flexibility, scalability, and security while optimizing costs and performance. This article explores effective hybrid cloud strategies for modern operations and how they can benefit your organization.

Understanding Hybrid Cloud

What is Hybrid Cloud?
A hybrid cloud is an integrated cloud environment that combines private cloud (on-premises or hosted) and public cloud services. This model allows organizations to seamlessly manage workloads across both cloud environments, leveraging the benefits of each while addressing specific business needs and regulatory requirements.

Benefits of Hybrid Cloud
- Flexibility: Hybrid cloud enables organizations to choose the optimal environment for each workload, enhancing operational flexibility.
- Scalability: By utilizing public cloud resources, organizations can scale their infrastructure dynamically to meet changing demands.
- Cost Efficiency: Hybrid cloud allows organizations to optimize costs by balancing between on-premises investments and pay-as-you-go cloud services.
- Enhanced Security: Sensitive data can be kept in a private cloud, while less critical workloads can be run in the public cloud, ensuring compliance and security.

Key Hybrid Cloud Strategies

1. Workload Placement and Optimization
Assessing Workload Requirements: Evaluate the specific requirements of each workload, including performance, security, compliance, and cost considerations. Determine which workloads are best suited for the private cloud and which can benefit from the scalability and flexibility of the public cloud.
Dynamic Workload Management: Implement dynamic workload management to move workloads between private and public clouds based on real-time needs. Use tools like VMware Cloud on AWS, Azure Arc, or Google Anthos to manage hybrid cloud environments efficiently.

2. Unified Management and Orchestration
Centralized Management Platforms: Utilize centralized management platforms to monitor and manage resources across both private and public clouds. Tools like Microsoft Azure Stack, Google Cloud Anthos, and Red Hat OpenShift provide a unified interface for managing hybrid environments, ensuring consistent policies and governance.
Automation and Orchestration: Automation and orchestration tools streamline operations by automating routine tasks and managing complex workflows. Use tools like Kubernetes for container orchestration and Terraform for infrastructure as code (IaC) to automate deployment, scaling, and management across hybrid cloud environments.

3. Security and Compliance
Implementing Robust Security Measures: Security is paramount in hybrid cloud environments. Implement comprehensive security measures, including multi-factor authentication (MFA), encryption, and regular security audits. Use security tools like AWS Security Hub, Azure Security Center, and Google Cloud Security Command Center to monitor and manage security across the hybrid cloud.
Ensuring Compliance: Compliance with industry regulations and standards is essential for maintaining data integrity and security. Ensure that your hybrid cloud strategy adheres to relevant regulations, such as GDPR, HIPAA, and PCI DSS. Implement policies and procedures to protect sensitive data and maintain audit trails.

4. Networking and Connectivity
Hybrid Cloud Connectivity Solutions: Establish secure and reliable connectivity between private and public cloud environments. Use solutions like AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect to create dedicated network connections that enhance performance and security.
Network Segmentation and Security: Implement network segmentation to isolate and protect sensitive data and applications. Use virtual private networks (VPNs) and virtual LANs (VLANs) to segment networks and enforce security policies. Regularly monitor network traffic for anomalies and potential threats.

5. Disaster Recovery and Business Continuity
Implementing Hybrid Cloud Backup Solutions: Ensure business continuity by implementing hybrid cloud backup solutions. Use tools like AWS Backup, Azure Backup, and Google Cloud Backup to create automated backup processes that store data across multiple locations, providing redundancy and protection against data loss.
Developing a Disaster Recovery Plan: A comprehensive disaster recovery plan outlines the steps to take in the event of a major disruption. Ensure that your plan includes procedures for data restoration, failover mechanisms, and communication protocols. Regularly test your disaster recovery plan to ensure its effectiveness and make necessary adjustments.

6. Cost Management and Optimization
Monitoring and Analyzing Cloud Costs: Use cost monitoring tools like AWS Cost Explorer, Azure Cost Management, and Google Cloud's cost management tools to track and analyze your cloud spending. Identify areas where you can reduce costs and implement optimization strategies, such as rightsizing resources and eliminating unused resources.
Leveraging Cost-Saving Options: Optimize costs by leveraging cost-saving options offered by cloud providers. Use reserved instances, spot instances, and committed use contracts to reduce expenses. Evaluate your workload requirements and choose the most cost-effective pricing models for your needs.

Case Study: Hybrid Cloud Strategy in a Financial Services Company
Background: A financial services company needed to enhance its IT infrastructure to support growth and comply with stringent regulatory requirements. The company adopted a hybrid cloud strategy to balance the need for flexibility, scalability, and security.
Solution: The company assessed its workload requirements and placed critical financial applications and sensitive data in a private cloud to ensure compliance and security. Less critical workloads, such as development and testing environments, were moved to the public cloud to leverage its scalability and cost-efficiency. Centralized management and orchestration tools were implemented to manage resources across the hybrid environment. Robust security measures, including encryption, MFA, and regular audits, were put in place to protect data and ensure compliance. The company also established secure connectivity between private and public clouds and developed a comprehensive disaster recovery plan.
Results: The hybrid cloud strategy enabled the financial services company to achieve greater flexibility, scalability, and cost-efficiency. The company maintained compliance with regulatory requirements while optimizing performance and reducing operational costs.

Adopting hybrid cloud strategies can significantly enhance modern operations by providing flexibility, scalability, and security. By leveraging the strengths of both private and public cloud environments, organizations can optimize costs, improve performance, and ensure compliance. Implementing these strategies requires careful planning and the right tools, but the benefits are well worth the effort.
AZ-800: Administering Windows Server Hybrid Core Infrastructure
This course teaches IT professionals how to manage core Windows Server workloads and services, and how to implement and manage hybrid and on-premises solutions such as identity, management, compute, networking, and storage, using on-premises, hybrid, and cloud technologies.
This four-day course is intended for hybrid Windows Server administrators who have experience with Windows Server and want to extend the capabilities of their on-premises environments by combining on-premises and hybrid technologies. Windows Server Hybrid administrators deploy and manage hybrid and on-premises solutions, such as identity, management, compute, networking, and storage, in a Windows Server hybrid environment.
Module 1: Identity services in Windows Server
This module introduces identity services and describes Active Directory Domain Services (AD DS) in a Windows Server environment. The module describes how to deploy domain controllers in AD DS, as well as Azure Active Directory (AD) and the benefits of integrating Azure AD with AD DS. The module also covers Group Policy basics and how to configure group policy objects (GPOs) in a domain environment.
Lessons
Introduction to AD DS
Manage AD DS domain controllers and FSMO roles
Implement Group Policy Objects
Manage advanced features of AD DS
Lab: Implementing identity services and Group Policy
Deploying a new domain controller on Server Core
Configuring Group Policy
After completing this module, students will be able to:
Describe AD DS in a Windows Server environment.
Deploy domain controllers in AD DS.
Describe Azure AD and the benefits of integrating Azure AD with AD DS.
Explain Group Policy basics and configure GPOs in a domain environment.
Module 2: Implementing identity in hybrid scenarios
This module discusses how to configure an Azure environment so that Windows IaaS workloads requiring Active Directory are supported. The module also covers integration of on-premises Active Directory Domain Services (AD DS) environment into Azure. Finally, the module explains how to extend an existing Active Directory environment into Azure by placing IaaS VMs configured as domain controllers onto a specially configured Azure virtual network subnet.
Lessons
Implement hybrid identity with Windows Server
Deploy and manage Azure IaaS Active Directory domain controllers in Azure
Lab: Implementing integration between AD DS and Azure AD
Preparing Azure AD for AD DS integration
Preparing on-premises AD DS for Azure AD integration
Downloading, installing, and configuring Azure AD Connect
Verifying integration between AD DS and Azure AD
Implementing Azure AD integration features in AD DS
After completing this module, students will be able to:
Integrate on-premises Active Directory Domain Services (AD DS) environment into Azure.
Install and configure directory synchronization using Azure AD Connect.
Deploy and configure Azure AD DS.
Implement Seamless Single Sign-on (SSO).
Install a new AD DS forest on an Azure VNet.
Module 3: Windows Server administration
This module describes how to implement the principle of least privilege through Privileged Access Workstation (PAW) and Just Enough Administration (JEA). The module also highlights several common Windows Server administration tools, such as Windows Admin Center, Server Manager, and PowerShell. This module also describes the post-installation configuration process and tools available to use for this process, such as sconfig and Desired State Configuration (DSC).
Lessons
Perform Windows Server secure administration
Describe Windows Server administration tools.
Perform post-installation configuration of Windows Server
Just Enough Administration in Windows Server
Lab: Managing Windows Server
Implementing and using remote server administration
After completing this module, students will be able to:
Explain least-privilege administrative models.
Decide when to use privileged access workstations.
Select the most appropriate Windows Server administration tool for a given situation.
Apply different methods to perform post-installation configuration of Windows Server.
Constrain privileged administrative operations by using Just Enough Administration (JEA).
Module 4: Facilitating hybrid management
This module covers tools that facilitate managing Windows IaaS VMs remotely. The module also covers how to use Azure Arc with on-premises server instances, how to deploy Azure policies with Azure Arc, and how to use role-based access control (RBAC) to restrict access to Log Analytics data.
Lessons
Administer and manage Windows Server IaaS virtual machines remotely
Manage hybrid workloads with Azure Arc
Lab: Using Windows Admin Center in hybrid scenarios
Provisioning Azure VMs running Windows Server
Implementing hybrid connectivity by using the Azure Network Adapter
Deploying Windows Admin Center gateway in Azure
Verifying functionality of the Windows Admin Center gateway in Azure
After completing this module, students will be able to:
Select appropriate tools and techniques to manage Windows IaaS VMs remotely.
Explain how to onboard on-premises Windows Server instances in Azure Arc.
Connect hybrid machines to Azure from the Azure portal.
Use Azure Arc to manage devices.
Restrict access using RBAC.
Module 5: Hyper-V virtualization in Windows Server
This module describes how to implement and configure Hyper-V VMs and containers. The module covers key features of Hyper-V in Windows Server, describes VM settings, and how to configure VMs in Hyper-V. The module also covers security technologies used with virtualization, such as shielded VMs, Host Guardian Service, admin-trusted and TPM-trusted attestation, and Key Protection Service (KPS). Finally, this module covers how to run containers and container workloads, and how to orchestrate container workloads on Windows Server using Kubernetes.
Lessons
Configure and manage Hyper-V
Configure and manage Hyper-V virtual machines
Secure Hyper-V workloads
Run containers on Windows Server
Orchestrate containers on Windows Server using Kubernetes
Lab: Implementing and configuring virtualization in Windows Server
Creating and configuring VMs
Installing and configuring containers
After completing this module, students will be able to:
Install and configure Hyper-V on Windows Server.
Configure and manage Hyper-V virtual machines.
Use Host Guardian Service to protect virtual machines.
Create and deploy shielded virtual machines.
Configure and manage container workloads.
Orchestrate container workloads using a Kubernetes cluster.
0 notes
Text
Kubernetes Network Policies Tutorial for DevOps Engineers, Beginners, and Students
Hi, a new #video on #kubernetes #networkpolicy is published on #codeonedigest #youtube channel. Learn #kubernetesnetworkpolicy #node #docker #container #cloud #aws #azure #programming #coding with #codeonedigest @java #java #awscloud @awscloud @AWSCloudI
In kubernetes cluster, by default, any pod can talk to any other pod with no restriction hence we need Network Policy to control the traffic flow. Network policy resource allows us to restrict the ingress and egress traffic to/from the pods. Network Policy is a standardized Kubernetes object to control the network traffic between Kubernetes pods, namespaces and the cluster. However, KubernetesâŚ
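The restriction the post describes can be sketched with a minimal NetworkPolicy manifest. The namespace, labels, and port below are illustrative assumptions, not taken from the original tutorial:

```yaml
# Selects pods labeled app=backend and restricts their ingress:
# only pods labeled app=frontend in the same namespace may connect,
# and only on TCP 8080. All names/labels here are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that once a pod is selected by any NetworkPolicy with an Ingress policy type, all ingress not explicitly allowed is denied, which is what makes this pattern useful as a default-deny baseline.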
0 notes
Link
Kong Inc. released Kong for Kubernetes version 0.8 - a Kubernetes Ingress controller that works with the Kong API Gateway. The release adds Knative integration, a new cluster level Custom Resource Definition, and annotations to minimize configuration. The Kong Gateway is an open source API gateway built on top of NGINX. The Kong for Kubernetes product is composed of two parts - "a Kubernetes controller, which manages the state of Kong for K8S ingress configuration, and the Kong Gateway, which processes and manages incoming API requests", according to the announcement blog post. Most managed Kubernetes deployments on public clouds utilize the cloud-vendor provided Ingress controllers. These controllers work off the vendor's load balancer and other compute abstractions. Kubernetes deployments also have the option of using other controllers - NGINX and HAProxy being among them. InfoQ reached out to Reza Shafii, VP of Products at Kong Inc., to find out more about this release as well as Kong for Kubernetes' capabilities in general. Shafii explains that Kong for Kubernetes adds all the features that the Kong API Gateway has to the Ingress. These are API management features - capabilities that "enable dynamic policy management of API traffic such as application of OIDC-based authentication, advanced rate limiting, request caching, or streaming of logs and metrics to different analytic providers at the same time". Shafii elaborates further on the performance aspects:
These include things such as a high performance profile (e.g., sub-millisecond latency and 25K+ transactions per second) and support for multiple protocols and interaction patterns (REST, graphQL, gRPC, TCP, etc.) â while providing all of the operations of Kong Gateway through Kubernetes CRDs. This last point is important because that is what then makes the operational aspect of the ingress consistent across all cloud providers and on-premises.
Although Kong is built on top of NGINX, its ingress controller differs markedly from both the default nginx-ingress-controller and NGINX's own commercial controller, according to Shafii. These differences lie in Kong's API management features, as well as the fact that, for users of Kong Gateway, it offers a "consistent API management lifecycle" across Kubernetes and non-Kubernetes workloads.
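As a sketch of how those API-management features surface through CRDs and annotations, a plugin can be declared as a resource and attached to an Ingress. The resource names, host, and service below are hypothetical, and exact fields may differ across Kong for Kubernetes versions:

```yaml
# A KongPlugin resource enabling rate limiting, referenced from an
# Ingress via the konghq.com/plugins annotation. All names are
# illustrative placeholders.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute
plugin: rate-limiting
config:
  minute: 5
  policy: local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-api
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: rate-limit-5-per-minute
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 80
```

Because the policy lives in a CRD rather than in vendor load-balancer configuration, the same declaration works across cloud providers and on-premises clusters, which is the consistency point Shafii makes above.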
Image courtesy: https://konghq.com/blog/kong-for-kubernetes-0-8-released/ (used with permission)
The 0.8 release adds Ingress support for Knative. Knative is a serverless platform for container-based workloads on Kubernetes that provides "higher-level abstractions for common app use-cases". The default Ingress for Knative is based on Istio, and there are alternatives like Gloo also available. Shafii explains the differences between Knative's default Ingress and the one provided by Kong:
What we have learned from our community and customers is that most use cases donât require the weight of Istio to run serverless workloads. And in fact, the heavy weight of Istio is one of the reasons that prompted us to initiate the Kuma service mesh project. Kong Gatewayâs plugin architecture and focus on extensibility helps keep Knative workloads focused solely on business logic.
It is also possible to use a cloud vendor's load balancer with Kong's ingress controller. Shafii fills in the details:
Cloud load balancers provide a way to balance traffic across multiple Kong Gateway nodes, and Kong Gateway nodes help manage traffic to multiple services within a cluster. In fact, for most users, we recommend the use of load balancers such as those of AWS or GCP to front Kong Gateways. This provides an endpoint inside the virtual private network in the cloud, which can then be exposed to other networks (different AWS accounts), other partner networks, or directly on the internet.
Similar to network traffic metrics provided by cloud vendors like GCP and AWS, and gateways like Istio and Gloo, Kong for Kubernetes can collect "metrics such as HTTP status and error codes, traffic throughput/latency, and (ingress/egress) bandwidth consumed on a per route and services level". Health-related metrics for the Kong Gateway itself such as "connections served, connection currently in use, shared memory usage and cache hit ratio" are also available, according to Shafii. He adds that these metrics can be integrated with Prometheus, Data Dog, StatsD, Zipkin and Jaeger. The 0.8 release has breaking changes for path-based routing and some annotations are deprecated. The changelog has the complete list of changes.
0 notes
Text
Hybrid Cloud Strategies for Modern Operations Explained
By combining these two cloud models, organizations can enhance flexibility, scalability, and security while optimizing costs and performance. This article explores effective hybrid cloud strategies for modern operations and how they can benefit your organization.

Understanding Hybrid Cloud

What is hybrid cloud? A hybrid cloud is an integrated cloud environment that combines private cloud (on-premises or hosted) and public cloud services. This model allows organizations to seamlessly manage workloads across both environments, leveraging the benefits of each while addressing specific business needs and regulatory requirements.

Benefits of hybrid cloud:
- Flexibility: Hybrid cloud enables organizations to choose the optimal environment for each workload, enhancing operational flexibility.
- Scalability: By utilizing public cloud resources, organizations can scale their infrastructure dynamically to meet changing demands.
- Cost efficiency: Hybrid cloud allows organizations to optimize costs by balancing on-premises investments against pay-as-you-go cloud services.
- Enhanced security: Sensitive data can be kept in a private cloud, while less critical workloads run in the public cloud, supporting both compliance and security.

Key Hybrid Cloud Strategies

1. Workload placement and optimization. Evaluate the specific requirements of each workload, including performance, security, compliance, and cost, then determine which workloads are best suited for the private cloud and which can benefit from the scalability and flexibility of the public cloud. Implement dynamic workload management to move workloads between private and public clouds based on real-time needs, using tools like VMware Cloud on AWS, Azure Arc, or Google Anthos to manage hybrid environments efficiently.

2. Unified management and orchestration. Use centralized management platforms to monitor and manage resources across both private and public clouds. Tools like Microsoft Azure Stack, Google Cloud Anthos, and Red Hat OpenShift provide a unified interface for managing hybrid environments, ensuring consistent policies and governance. Automation and orchestration tools streamline operations by automating routine tasks and managing complex workflows; use Kubernetes for container orchestration and Terraform for infrastructure as code (IaC) to automate deployment, scaling, and management across hybrid cloud environments.

3. Security and compliance. Security is paramount in hybrid cloud environments. Implement comprehensive measures, including multi-factor authentication (MFA), encryption, and regular security audits, and use tools like AWS Security Hub, Azure Security Center, and Google Cloud Security Command Center to monitor and manage security across the hybrid cloud. Compliance with industry regulations and standards is equally essential: ensure that your hybrid cloud strategy adheres to relevant regulations such as GDPR, HIPAA, and PCI DSS, and implement policies and procedures to protect sensitive data and maintain audit trails.

4. Networking and connectivity. Establish secure and reliable connectivity between private and public cloud environments with solutions like AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect, which create dedicated network connections that enhance performance and security. Implement network segmentation to isolate and protect sensitive data and applications, using virtual private networks (VPNs) and virtual LANs (VLANs) to segment networks and enforce security policies, and regularly monitor network traffic for anomalies and potential threats.

5. Disaster recovery and business continuity. Ensure business continuity by implementing hybrid cloud backup solutions such as AWS Backup, Azure Backup, and Google Cloud Backup, which create automated backup processes that store data across multiple locations, providing redundancy and protection against data loss. Develop a comprehensive disaster recovery plan that includes procedures for data restoration, failover mechanisms, and communication protocols, and test it regularly to ensure its effectiveness.

6. Cost management and optimization. Use cost monitoring tools like AWS Cost Explorer, Azure Cost Management, and Google Cloud's cost management tools to track and analyze your cloud spending, identify areas where you can reduce costs, and implement optimization strategies such as rightsizing resources and eliminating unused ones. Also leverage cost-saving options offered by cloud providers, including reserved instances, spot instances, and committed use contracts, choosing the most cost-effective pricing models for your workloads.

Case Study: Hybrid Cloud Strategy in a Financial Services Company

A financial services company needed to enhance its IT infrastructure to support growth and comply with stringent regulatory requirements, and adopted a hybrid cloud strategy to balance flexibility, scalability, and security. The company assessed its workload requirements and placed critical financial applications and sensitive data in a private cloud to ensure compliance and security, while less critical workloads, such as development and testing environments, moved to the public cloud to leverage its scalability and cost-efficiency. Centralized management and orchestration tools were implemented across the hybrid environment; robust security measures, including encryption, MFA, and regular audits, were put in place; secure connectivity was established between the private and public clouds; and a comprehensive disaster recovery plan was developed. The strategy gave the company greater flexibility, scalability, and cost-efficiency while maintaining regulatory compliance and reducing operational costs.

Adopting hybrid cloud strategies can significantly enhance modern operations by providing flexibility, scalability, and security. By leveraging the strengths of both private and public cloud environments, organizations can optimize costs, improve performance, and ensure compliance. Implementing these strategies requires careful planning and the right tools, but the benefits are well worth the effort.
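The infrastructure-as-code point above can be sketched with a minimal Terraform configuration. The provider choice, region, AMI ID, and tags are all placeholder assumptions for illustration, not prescriptions from the article:

```hcl
# Illustrative IaC sketch: a burstable public-cloud worker that
# complements a private-cloud baseline. Every identifier here is
# a hypothetical placeholder.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "burst_worker" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.medium"

  tags = {
    Environment = "hybrid-burst"
    ManagedBy   = "terraform"
  }
}
```

Keeping this definition in version control is what makes the "consistent policies and governance" goal tractable: the same reviewed, repeatable declaration can be applied to the public side of the hybrid estate instead of hand-configured consoles.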
0 notes
Text
Kubernetes Opportunities, Challenges Escalated in 2019
If 2018 was the year that Kubernetes broke into the mainstream, then 2019 was the year that reality set in. And that reality is that while Kubernetes is awesome, itâs also hard.
 The Kubernetes ecosystem did its usual part in feeding the market by staying on track in rolling out quarterly updates to the platform. And that feeding has helped Kubernetes continue to steamroll the cloud market. However, ongoing security and commercialization challenges showed that growth is not coming without challenges.
 Kubernetes 2019: Ecosystem Explosion
Kubernetes continued to draw interest from just about every company associated with the cloud space. This was evident at the most recent KubeCon + CloudNativeCon event in San Diego, which drew more than 12,000 attendees, a 50% increase from the previous event held in North America.
 The Cloud Native Computing Foundation (CNCF), which houses the open source project, found in its first Project Journey report that Kubernetes had 315 companies contributing to the project with âseveral thousand having committed over the life of the project.â That was a significant increase from the 15 that were contributing prior to CNCF adopting the project in early 2016.
 Including individual contributors, Kubernetes counted about 24,000 total contributors since being adopted by CNCF, 148,000 code commits, 83,000 pull requests, and 1.1 million total contributions. âIt is the second- or third-highest velocity open source project depending on how you count it â up there with Linux and React,â explained CNCF Executive Director Dan Kohn in an interview.
 Security Surprises
Along with that growth has come an increased focus on platform security. This feeds into what remains one of the biggest concerns for enterprises that want to drive Kubernetes deeper into their operations.
Hindering that drive was the discovery over the past year of a number of high-profile security lapses that tested overall confidence in the platform.
Perhaps the most troubling flaw was one found in the Kubernetes kubectl command-line tool, which runs commands against a Kubernetes cluster to deploy applications, inspect and manage cluster resources, and view logs. If breached, the exploit could allow an attacker to use an infected container to replace or create new files on a user's workstation.
 The biggest challenge with this particular bug was that the vulnerability was discovered earlier in the year and that it continued to exist even after a patch had been sent out to remediate the issue. âThe original fix for that issue was incomplete and a new exploit method was discovered,â wrote Joel Smith, who works with the Kubernetes Product Security Committee, in a message post.
More recently, an API vulnerability was discovered that, if exploited, would allow an attacker to launch a denial-of-service (DoS) attack amusingly dubbed the "billion laughs" attack.
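The "billion laughs" class of attack abuses YAML anchors and aliases so that a tiny document expands enormously when parsed. A truncated sketch of the pattern (for illustration only; do not submit anything like this to a real cluster):

```yaml
# Each key references the previous anchor three times, so the
# expanded size grows exponentially with depth. A few more levels
# of this nesting is what made vulnerable parsers exhaust memory
# and CPU while expanding the aliases.
a: &a ["lol", "lol", "lol"]
b: &b [*a, *a, *a]
c: &c [*b, *b, *b]
d: &d [*c, *c, *c]
e: &e [*d, *d, *d]
```

The payload itself is only a few hundred bytes, which is why it makes such an effective denial-of-service vector against any endpoint that naively expands YAML.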
 The CNCF has moved aggressively to head off security concerns. This year it released a security audit that found dozens of security vulnerabilities in the container orchestration platform. These included five high-severity issues and 17 medium-severity issues. Fixes for those issues have been deployed.
 The overall size and operational complexity of Kubernetes was cited as being a key reason for these security holes.
 âThe assessment team found configuration and deployment of Kubernetes to be non-trivial, with certain components having confusing default settings, missing operational controls, and implicitly defined security controls,â the audit explained.
 It also found that the extensive Kubernetes codebase lacks detailed documentation to guide administrators and developers in setting up a robust security posture.
 âThe codebase is large and complex, with large sections of code containing minimal documentation and numerous dependencies, including systems external to Kubernetes,â the audit noted. âThere are many cases of logic re-implementation within the codebase, which could be centralized into supporting libraries to reduce complexity, facilitate easier patching, and reduce the burden of documentation across disparate areas of the codebase.â
 Despite those concerns, the audit did find that Kubernetes does streamline âdifficult tasks related to maintaining and operating cluster workloads such as deployments, replication, and storage management.â The use of role-based access controls (RBAC) also allows users an avenue to increase security.
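The RBAC avenue the audit points to can be as simple as replacing cluster-wide privileges with a namespace-scoped, read-only role. The namespace, role names, and user below are illustrative assumptions:

```yaml
# Grants a hypothetical user read-only access to pods in a single
# namespace rather than cluster-wide privileges.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: User
    name: jane # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Scoping rights this way limits the blast radius of exactly the kind of credential or tooling compromise the audit's findings describe.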
 Go-to-Market
Shoring up the security component is an important task for the Kubernetes ecosystem, but not the only one that continues to hinder broader deployments. While seemingly everyone wants to adopt Kubernetes, it remains a complex challenge for many.
 This particular problem has been good for some vendors that have been able to use that complexity to drive their business. Kubernetes in 2019 witnessed billions of dollars thrown at established brands and startups through mergers and acquisitions or venture capital funding.
 Highlights of this growth include the $34 billion IBM forked over to buy Red Hat, which closed this year, and the several billion dollars VMware spent to bolster its Kubernetes assets.
 While some have managed to strike gold with Kubernetes, others have floundered under its shadow.
 Docker Inc., which developed the open source container platform that instigated the Kubernetes revolution, was recently forced to sell its Kubernetes-focused enterprise management business because it could not make a go of it in an increasingly crowded market.
 Analysts noted that Docker Inc.âs push to make container adoption easier was also part of its downfall. âIn a sense, Docker is almost a victim of its own success,â Jay Lyman, research analyst at 451 Research, recently told SDxCentral. âIt democratized containers and made them easier to use.â
 Others felt the same pressure.
 Mesosphere, which was one of the first vendors to release a container orchestration platform with its Marathon product that ran inside of DC/OS, changed its name to D2IQ. That move came under the auspice of changing its focus from helping companies set up their cloud-native infrastructure to âday twoâ (D2) challenges of running that infrastructure in a production environment (IQ).
 Smaller startup Containership also succumbed, announcing that it was closing up shop after being unable to monetize its operations in light of Kubernetesâ rise. This included a failed attempt to pivot its Containership Cloud operations toward a more Kubernetes-focused platform.
 Edging Toward the Edge
Kubernetes might have made it difficult for some to compete, but that does not mean there is not still more room for growth. One Kubernetes area that gained momentum in 2019 was around edge.
 This opportunity is being driven by the growing need to extend the reach of networks toward the end user. This is necessary to support potentially lucrative low-latency use cases.
 A recent report from Mobile Experts predicts the edge computing market will grow 10-fold by 2024. It notes that the edge computing trend expands from centralized hyperscale data centers to distributed edge cloud nodes, with capex spend on near edge data centers representing the largest segment of the market.
 A number of vendors repackaged Kubernetesâ core in a way that allows the platform to operate in resource-constrained environments. That slimness is important because edge locations are more resource constrained compared with data center or network core locations.
 Vendors like Rancher Labs, CDNetworks, and Edgeworx all rolled out platforms built on variations of Kubernetes that can live in these environments.
 Other vendors have been plugging the full Kubernetes platform into their efforts.
Mirantis last year plugged Kubernetes into its Cloud Platform Edge product to allow operators to deploy a combination of containers, virtual machines (VMs), and bare-metal points of presence (PoPs) connected by a unified management plane.
 Similarly, IoTium last year updated its edge-cloud infrastructure that is built on remotely-managed Kubernetes. The platform places Kubernetes at an edge location where it can be inside a node. The company uses a full version of Kubernetes running on IoTiumâs SD-WAN platform.
There is also the KubeEdge open source project, which supports edge nodes, applications, devices, and cluster management consistent with the Kubernetes interface. This can help an edge cloud act exactly like a cloud cluster.
 And of course ⌠5G
And the full Kubernetes stack is also being angled toward 5G deployments.
 The Linux Foundationâs LF Networking group conducted a live demo of a Kubernetes-powered end-to-end 5G cloud native network at the KubeCon + CloudNativeCon North America event that showed significant progress toward what future open source telecom deployments could look like.
 Heather Kirksey, VP of community and ecosystem development at the Linux Foundation, said the demo was important due to the growing amount of work around networking issues and Kubernetes. The container orchestration platform is being tasked with managing the container-based infrastructure that will be needed to support the promise of 5G networks.
 âWe are embracing cloud native and new applications and we want to let the folks here know why we want to partner with the cloud native developer community,â Kirksey said. âIt has been a bit of a challenge to get that community excited about telecom and to get excited about working with us to advance networking.â
 That Kubernetes focus on 5G telecom was echoed at the event by Craig McLuckie, VP of product and development at VMware, during an interview with SDxCentral. McLuckie, who was formerly at Google where he worked on its Compute Engine and the platform that eventually became the Kubernetes project, said that 5G will âbe a fantastic and interesting challenge for the Kubernetes community and the communityâs codebase in how they might solve this.â
 The past year did indeed show that while Kubernetes has gained a certain stature, it remains a strong center of development and opportunity. The big challenge now will be in how the ecosystem deals with that success and opportunities in 2020.[Source]- https://www.sdxcentral.com/articles/news/kubernetes-opportunities-challenges-escalated-in-2019/2019/12/
0 notes
Text
Original post from InfoSecurity Magazine.
Researchers Find 40,000+ Containers Exposed Online
Researchers have discovered over 40,000 Kubernetes and Docker container hosting devices exposed to the public internet through misconfigurations.
Palo Alto Networksâ Unit 42 revealed the results of its latest research in a blog post yesterday. The discovery was made via a simple Shodan search.
Some 23,353 Kubernetes containers were found in this way, located mainly in the US, as well as Ireland, Germany, Singapore, and Australia. A further 23,354 misconfigured Docker containers were discovered exposed to the internet, mainly in China, the US, Germany, Hong Kong, and France.
"This does not necessarily mean that each of these 40,000+ platforms are vulnerable to exploits or even the leakage of sensitive data: it simply highlights that seemingly basic misconfiguration practices exist and can make organizations targets for further compromising events," explained senior threat researcher Nathaniel Quist.
"Seemingly simple misconfigurations within cloud services can lead to severe impacts on organizations."
This has happened several times in the past: attackers exploited weak security configurations to steal keys and tokens for 190,000 Docker Hub accounts, while poor container security also led to a major breach of 13 million user records at Ladders.
Digging down into the exposed containers they found, the Palo Alto researchers discovered unprotected databases, in one case exposing multiple email addresses.
"Misconfigurations such as using default container names and leaving default service ports exposed to the public leave organizations vulnerable to targeted reconnaissance," Quist concluded.
"Using proper network policies or firewalls can prevent internal resources from being exposed to the public internet. Additionally, investing in cloud security tools can alert organizations to risks within their current cloud infrastructure."
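As an illustration of the network policies Quist recommends, here is a minimal Kubernetes NetworkPolicy sketch (the `production` namespace name is hypothetical) that denies all ingress traffic to every pod in the namespace unless a later policy explicitly allows it:

```yaml
# Hypothetical default-deny policy: blocks all ingress traffic to pods in the
# "production" namespace unless another NetworkPolicy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all ingress is denied
```

Starting from a default-deny baseline like this, teams can then whitelist only the traffic each workload actually needs.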
Some 60% of US organizations experienced security incidents related to their use of containers over the previous year, according to research from Tripwire released in January.
0 notes
Text
The New Network as a Sensor
Before we get into this, we need to talk about what the network as a sensor was before it was new. Conceptually, instead of having to install a bunch of sensors to generate telemetry, the network itself (routers, switches, wireless devices, etc.) would deliver the necessary and sufficient telemetry to describe the changes occurring on the network to a collector and then Stealthwatch would make sense of it.
The nice thing about the network as a sensor is that the network itself is the most pervasive observer. So, in terms of an observable domain and the changes within that domain, it is a complete map. This was incredibly powerful. If we go back to when NetFlow was announced, let's say a later version like v9 or IPFIX, we had a very rich set of telemetry coming from the routers and switches that described all the network activity: who's talking to whom, for how long, and all the things we needed to detect insider threats and global threats. The interesting thing about this telemetry model is that threat actors can't hide in it. They need to generate this traffic or it's not actually going to traverse the network. It's a source of telemetry that's true for both the defender and the adversary.
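To make the "who's talking to whom, for how long" idea concrete, here is a small Python sketch (with made-up, simplified flow fields, not a real NetFlow parser) that reduces flow records into that kind of ledger:

```python
from collections import defaultdict

# Hypothetical, simplified flow records in the spirit of NetFlow v9 / IPFIX:
# (source IP, destination IP, destination port, bytes, duration in seconds)
flows = [
    ("10.0.0.5", "10.0.0.9", 443, 5200, 12.0),
    ("10.0.0.5", "10.0.0.9", 443, 1800, 3.5),
    ("10.0.0.7", "198.51.100.4", 22, 90, 0.4),
]

# Reduce the raw records into a "who talked to whom, for how long" ledger.
ledger = defaultdict(lambda: {"bytes": 0, "seconds": 0.0})
for src, dst, port, nbytes, secs in flows:
    key = (src, dst, port)
    ledger[key]["bytes"] += nbytes
    ledger[key]["seconds"] += secs

for (src, dst, port), totals in sorted(ledger.items()):
    print(f"{src} -> {dst}:{port}  {totals['bytes']} bytes over {totals['seconds']}s")
```

The point of the sketch is that every conversation, legitimate or hostile, must appear in records like these in order to cross the network at all.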
The Changing Network
Networks are changing. The data centers we built in the '90s and 2000s, and the enterprise networking we did back then, are different from what we're seeing today. There is a continuum here, and you as a customer fall somewhere along it. You may have fully embraced the cloud, held fast to legacy systems, or still have a foot in both to some degree. When we look at this continuum, we see the origins back when compute was very physical: so-called bare metal, when imaging from optical drives was the norm and rack units were a very real unit of measure within your datacenter. We then saw a lot of hypervisors when the age of VMware and KVM came into being. The network topology changed because guest-to-guest traffic didn't really touch any optical or copper wire but essentially traversed memory inside the host to cross the hypervisor. So Stealthwatch had to adapt and make sure something was there to observe behavior and generate telemetry.
Moving closer to the present, we had things like cloud-native services, where people could take their guest virtual machines from their private hypervisors and run them on the service provider's networked infrastructure. This was the birth of the public cloud and where the concept of infrastructure as a service (IaaS) began. This is also how a lot of services, including Cisco services and many of the services you use today, run to this day. Recently, we've seen the rise of Docker containers, which in turn gave rise to orchestration with Kubernetes. Now a lot of people have systems running in Kubernetes, with containers running at incredible scale that can adapt to changing workload demand. Finally, we have serverless. When you think of the network as a sensor, you have to think of the network in these contexts and how it can actually generate telemetry. Stealthwatch is always there to make sense of that telemetry and deliver the analytic outcome of discovering insider threats and global threats. Think of Stealthwatch as the general ledger of all the activity that takes place across your digital business.
Now that we've looked at how networks have evolved, we're going to slice the new network as a sensor into three different stories. In this blog, we'll look at two of these three transformative trends that are in everyone's life to some degree. Typically, when we talk about innovation, we're talking about threat actors and the kinds of threats we face; when threats evolve, defenders are forced to innovate to counter them. Here, however, I'm talking about transformative changes that are important to your digital business in varying ways. We're going to take them one by one and explain what they are and how they change what the network is and how it can be a sensor for you.
Cloud Native Architecture
Now we're seeing the dawn of serverless via things like AWS Lambda. For those that aren't familiar, think of serverless as something like Uber for code. You don't want to own a car or learn how to drive; you just want to get to your destination. The same concept applies to serverless: you just want your code to run and you want the output. Everything else (the machine, the supporting applications, and everything that runs your code) is owned and operated by the service provider. In this situation, things change a lot: you don't own the network or the machines. Serverless computing is a cloud computing execution model in which cloud providers dynamically manage the allocation of machine resources (i.e., the servers).
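For a concrete sense of how little you own in this model, a serverless function is typically just a handler like the Python sketch below (the event fields are illustrative); the provider supplies and operates everything that invokes it:

```python
import json

# Minimal sketch of an AWS Lambda-style handler. The "name" field in the
# event is a made-up example input; the provider owns the machine, the
# runtime, and the network; you supply only this function.
def handler(event, context):
    # "event" carries the input; there is no server for you to manage here.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local demonstration call (in production the cloud provider invokes this).
result = handler({"name": "edge"}, None)
print(result)
```

Everything outside the function body, including the "server" the telemetry describes, exists only for the duration of the invocation.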
So how do you secure a server when thereâs no server?
Stealthwatch Cloud does it by dynamically modeling the server (that does not exist) and holding that state over time as it analyzes the changes described by the cloud-native telemetry. We take in a lot of metadata and build a model (in this case, of a server), and over time, as everything changes around this model, we hold state as if there really were a server. We perform the same type of analytics, trying to detect potential anomalies that would be of interest to you.
In this image you can see that the modeled device has, in a 24-hour period, changed IP address and even its virtual interfaces whereby IP addresses can be assigned. Stealthwatch Cloud creates a model of a server to solve the serverless problem and treats it like any other endpoint on your digital business that you manage.
This "entity modeling" that Stealthwatch Cloud performs is critical to the analytics of the future, because even in this chart you would think you were just managing a bare-metal or virtual server over a long period of time. But believe it or not, these traffic trends represent a server that was never really there! Entity modeling allows us to perform threat analytics within cloud-native serverless environments like these. Entity modeling is one of the fundamental technologies in Stealthwatch, and you can find out more about it here.
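A toy sketch of the idea (not Stealthwatch code, and the entity name is invented): key the model on a stable entity identity rather than on an IP address, so accumulated state survives address churn:

```python
# Illustrative sketch of entity modeling: the model keys on a stable entity
# identity, not on the (ephemeral) IP address, so state survives churn.
class EntityModel:
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.current_ip = None
        self.total_bytes = 0

    def observe(self, ip, nbytes):
        # The IP may change between observations; the entity does not.
        self.current_ip = ip
        self.total_bytes += nbytes

model = EntityModel("checkout-service")
model.observe("10.0.0.5", 4000)   # morning: one address
model.observe("10.0.1.9", 6000)   # afternoon: redeployed, new address
print(model.entity_id, model.current_ip, model.total_bytes)
```

Because the traffic history accumulates on the entity rather than the address, the model behaves as if a single long-lived server existed, even when no one machine ever did.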
We're not looking at blacklists of things like threat actors' IP addresses or fully qualified domain names. There's no list of bad things; instead, we tell you about events of interest that have not yet made their way to a list. It catches things you did not know to even put on a list: things in potential gray areas that really should be brought to your attention.
If you currently have serverless workloads, or if your developers are starting to play with them, take Stealthwatch Cloud for a spin! There is a 60-day free trial that takes less time to set up than getting a coffee.
Software Defined Networks: Underlay & Overlay Networks
When we look at overlay networks, we're really talking about software-defined networks and the encapsulation that happens on top of them. The oldest of these is probably Multiprotocol Label Switching (MPLS), but today you have techniques like VXLAN and TrustSec. The appeal is that instead of having to renumber your network to represent your segmentation, you use encapsulation to express the desired segmentation policy of the business. The overlay network uses encapsulation to define policy based not on destination-based routing but on labels. When we look at something like SD-WAN, you can see the traditional network architecture model changing. You still have the access layer or edge of your network, but everything else in the middle is now a programmable mesh, whereby you can concentrate on your access policy rather than the complexity of the underlay's IP addressing scheme.
For businesses that have fully embraced software-defined networking of any type, the underlay is a lie! The underlay is still an observational domain for change, and its telemetry is still valid, but it does not represent what is going on in the overlay network. To see that, you either need access to the overlay's native telemetry or sensors that can generate telemetry that includes the overlay labeling.
Enterprise networking becomes about as easy to set up as a home network, which is an incredibly exciting prospect. Whether your edge is a regular branch office, a hypervisor on a private cloud, or IaaS in a public cloud, as traffic enters the rest of the Internet it crosses an overlay network that describes where things should go and provisions the necessary virtual circuits. When we look at how this relates to Stealthwatch, there are a few key things to consider. Stealthwatch gets the underlay information from NetFlow or IPFIX. If it has virtual sensors sitting on hypervisors or the like, it can interpret the overlay labels (or tags), faithfully representing the overlay. Lastly, Stealthwatch looks to interface with the actual software-defined networking (SDN) controller so it can make sense of the overlay. The job of Stealthwatch is to put together the entire story of who is talking to whom and for how long, taking into account not just the underlay but also the overlay.
Conclusion
This concludes the first part of our look at the new network as a sensor. Hopefully we have described the changes over the years that separate the old from the new, and how the Stealthwatch product has had to adapt to continue delivering high-fidelity threat analytics. Threat actors get excited about these changes too, because they offer new places to hide and persist in your network. Today's digital businesses must embrace these changes, but to remain secure you must not give threat actors anywhere to hide, which makes visibility key. Stealthwatch understands this and is ready when you are. In part two, we'll look at another transformative trend that is shaping the way we look at modern networking.
The New Network as a Sensor published first on https://brightendentalhouston.tumblr.com/
0 notes
Text
Top data links this week - Google AI & facial recognition
Google announced last week that it would renounce Project Maven, its joint project with the US military. The company gave up a billion dollars of turnover in response to a widespread petition within the company not to use Google tech to make weapons more accurate. This is an impressive victory for Google employees. The article by Jacobin Magazine tells the story of how the employees organised to oppose the project, and won! As part of its response, Google has also released its AI principles, with what is perhaps its new motto, "Be socially beneficial", a nice reminder of the original "Don't be evil."
Our top data links this week:
- Can't be done: What tech calls "AI" isn't really AI
- Political AI: How The New York Times Uses Software To Recognize Members of Congress
- Getting the right food: Food Discovery with Uber Eats: Building a Query Understanding Engine
- Awesome Dataviz of the week: Working Remotely and Where the Time Goes
- Finally in the USA: US government to use facial recognition technology at Mexico border crossing
- Data project of the week: Where Killings Go Unsolved by the Washington Post
Here are some non-GDPR related articles for you to read.
- How can Santa keep his lists when GDPR is around? by Worldbuilding: "I haven't been notified by Santa and/or his elves that he is collecting data about me. And mind you: my name and surname are my personal data, not to mention data on whether I have been good or naughty."
- Microsoft sinks data centre off Orkney by BBC News: "The data centre, a white cylinder containing computers, could sit on the sea floor for up to five years."
- Machine Learning: how to go from Zero to Hero by FreeCodeCamp: "If your understanding of A.I. and Machine Learning is a big question mark, then this is the blog post for you. Here, I gradually increase your Awesomenessicity™ by gluing inspirational videos together with friendly text."
- How Policymakers Can Foster Algorithmic Accountability by Center for Data Innovation: "Policymakers should reject these proposals and instead support algorithmic decision-making by promoting policies that ensure its robust development and widespread adoption."
- Counterintuitive examples in probability by Mathematics: "For a simple random walk, the mean number of visits to point b before returning to the origin is equal to 1 for every b ≠ 0."
- Reinforcement Learning from scratch by Insight: "Contrary to many classical Deep Learning problems that often focus on perception (does this image contain a stop sign?), Deep RL adds the dimension of actions that influence the environment (what is the goal, and how do I get there?)."
- Machine learning explained with gifs: style transfer by Eliot Andres: "Pioneered in 2015, style transfer is a concept that transfers the style of a painting to an existing photograph, using neural networks."
- Kubernetes best practices: upgrading your clusters with zero downtime by Google Cloud Platform Blog: "Today is the final installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment."
0 notes
Quote
The new release is focused on providing the scalability, management and security capabilities required to support Kubernetes at edge scale. A headline enhancement is support for one million clusters (currently available in preview). For general availability, the product now supports two thousand clusters and one hundred thousand nodes. Another enhancement is limited-connectivity maintenance with K3s. Designed for cluster management, upgrades and patches where clusters may not have a fixed or stable network connection, Rancher 2.4 can kick off an upgrade remotely, but the process is managed on local K3s clusters, allowing users to manage upgrades and patches locally and then synchronise with the management server once connectivity is restored. Rancher 2.4 also enables zero-downtime maintenance, allowing organisations to upgrade Kubernetes clusters and nodes without application interruption. Additionally, users can select and configure their upgrade strategy for add-ons so that DNS and Ingress do not experience service disruption. Rancher 2.4 introduces CIS Scan, which allows users to run ad-hoc security scans of their RKE clusters against CIS benchmarks published by the Center for Internet Security. Users can create custom test configurations and generate reports illustrating pass/fail information, from which they can take corrective action to ensure their clusters meet security requirements. Rancher 2.4 is available in a hosted Rancher deployment, in which each customer has a dedicated AWS instance of a Rancher Server management control plane. The hosted offering includes a full-featured Rancher server, delivers a 99.9% SLA and automates upgrades, security patches and backups. Downstream clusters (e.g. GKE, AKS) are not included in the SLA and continue to be operated by the respective distribution provider. Several best practices were followed during the hosted Rancher build, including infrastructure as code (IaC), immutable infrastructure and a 'Shift Left' approach.
Packer, Terraform and GitHub were chosen for tooling. Rancher delivers a consistent Kubernetes management experience for all certified distributions, including RKE, K3s, AKS, EKS, and GKE on-premise, cloud and/or edge. InfoQ spoke to Sheng Liang, CEO and co-founder of Rancher Labs, about the announcement:
InfoQ: What is 'the edge'?
Sheng Liang: When talking about the edge, people typically mean small and standalone computing resources like set-top boxes, ATM machines, and IoT gateways. In the broadest sense, however, you can think of the edge as any computing resource that is not in the cloud. So, not only do branch offices constitute part of your edge locations, developer laptops are also part of the device edge, and legacy on-premises systems could be considered the data centre edge.
InfoQ: What's the difference between K3s and K8s?
Liang: K3s adds specialised configurations and components to K8s so that it can be easily deployed and managed on edge devices. For example, K3s introduces a number of configuration database options beyond the standard etcd key-value store to make Kubernetes easier to operate in resource-constrained environments. K8s is often operated by dedicated DevOps engineers or SREs, whereas K3s is packaged as a single binary and can be deployed with applications or embedded in servers.
InfoQ: Please can you explain the RKE strategy?
Liang: RKE is Rancher's Kubernetes distribution for data centre deployments. It is a mature, stable, enterprise-grade, and easy-to-use Kubernetes distribution. It has been in production and used by large enterprise customers for years. Going forward, we plan to incorporate many of the more modern Kubernetes operations enhancements developed in K3s into RKE 2.0.
InfoQ: Why are people concerned about security in Kubernetes?
Liang: As a new layer of software running between the applications and the underlying infrastructure, Kubernetes has a huge impact on the overall security of the system. On one hand, Kubernetes brings enhanced security by introducing opportunities to check, validate, encrypt, control, and lock down application workloads and the underlying infrastructure. On the other hand, a misconfigured Kubernetes could introduce additional security holes in the overall technology stack. It is therefore essential for Kubernetes management platforms like Rancher to ensure 1) Kubernetes clusters are configured securely (using, for example, CIS benchmarks) and 2) applications take advantage of the numerous security enhancements offered by Kubernetes.
InfoQ: What are the typical security requirements a Kubernetes cluster needs to comply with?
Liang: At the most basic level, every Kubernetes cluster needs to have proper authentication, role-based access control, and secret management. When an enterprise IT organisation manages many different clusters, they need to make sure to have centralised policy management across all clusters. An enterprise IT organisation, for example, can mandate a policy that all production Kubernetes clusters have the necessary security tools (e.g., Aqua or Twistlock) installed.
InfoQ: If teams want Rancher hosted on Azure or GCP, can they have that?
Liang: As open source software, Rancher can be installed on any infrastructure, including AWS, Azure, and GCP. In that case, though, the users have to operate Rancher themselves. The initial launch of hosted Rancher in Rancher 2.4 only runs on AWS. We plan to launch hosted Rancher in Azure and GCP in the future.
InfoQ: How is it that Rancher is able to support such a wide range of Kubernetes distributions?
Liang: Rancher is able to support any Kubernetes distribution because Kubernetes is the standard for computing. All Kubernetes distribution vendors today commit to running the same upstream Kubernetes code and to passing the same CNCF-defined compliance tests. Rancher is then able to take advantage of the portability guarantee of Kubernetes to create a seamless computing experience that spans the data centre, cloud, and edge. Rancher does not attempt to create a vertically locked-in technology stack that ties Rancher Kubernetes management with the Rancher Kubernetes distribution.
InfoQ: What are the geographies that Rancher is targeting for expansion, and how will this happen?
Liang: As an open source project, Rancher is adopted by Kubernetes users worldwide. Rancher today has commercial operations in fourteen countries across the Americas, Europe, Africa, and the Asia Pacific region. Our geographic presence will continue to grow as we generate significant amounts of enterprise subscription business in more countries.
InfoQ: What proportion of enterprise applications currently run on Kubernetes, and what's the forecast for growth?
Liang: Despite the rapidly rising popularity of Kubernetes, the proportion of enterprise applications running on Kubernetes is still small among Rancher customers. Rancher customers have reported low single-digit percentages of applications running on Kubernetes, which represents tremendous upside growth potential for Rancher.
http://damianfallon.blogspot.com/2020/04/rancher-24-supports-1-million.html
0 notes